VSE: Variational state estimation of complex model-free process
Norén, Gustav, Ghosh, Anubhab, Cumlin, Fredrik, Chatterjee, Saikat
We design a variational state estimation (VSE) method that provides a closed-form Gaussian posterior of an underlying complex dynamical process from noisy nonlinear measurements. The complex process is model-free: we do not have a suitable physics-based model characterizing the temporal evolution of the process state. The closed-form Gaussian posterior is provided by a recurrent neural network (RNN), which keeps the inference phase computationally simple. A second RNN is used in the learning phase, and the two RNNs help each other learn better based on variational inference principles. VSE is demonstrated for a tracking application: state estimation of a stochastic Lorenz system (a benchmark process) using a 2-D camera measurement model. VSE is shown to be competitive against a particle filter that knows the Lorenz system model and against a recently proposed data-driven state estimation method that does not.
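The core inference idea in the abstract (an RNN that maps a measurement sequence to per-step Gaussian posterior parameters) can be illustrated with a minimal sketch. The dimensions, weight names, and the toy `tanh` recurrence below are all illustrative assumptions, not the paper's actual architecture; in the paper the weights would be learned with the help of the second RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 3-D Lorenz state, 2-D camera measurements.
state_dim, meas_dim, hidden_dim = 3, 2, 16

# Randomly initialized toy weights (stand-ins for learned parameters).
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_y = rng.normal(scale=0.1, size=(hidden_dim, meas_dim))
W_mu = rng.normal(scale=0.1, size=(state_dim, hidden_dim))
W_logvar = rng.normal(scale=0.1, size=(state_dim, hidden_dim))

def vse_posterior(measurements):
    """Map a measurement sequence to per-step Gaussian posteriors
    N(mu_t, diag(var_t)) over the latent state -- a sketch of the
    inference-phase RNN described in the abstract."""
    h = np.zeros(hidden_dim)
    means, variances = [], []
    for y_t in measurements:
        h = np.tanh(W_h @ h + W_y @ y_t)        # recurrent state update
        means.append(W_mu @ h)                  # posterior mean
        variances.append(np.exp(W_logvar @ h))  # positive diagonal variances
    return np.array(means), np.array(variances)

ys = rng.normal(size=(50, meas_dim))  # 50 noisy 2-D measurements
mu, var = vse_posterior(ys)
print(mu.shape, var.shape)            # (50, 3) (50, 3)
```

Emitting a mean and a log-variance per step is what makes the posterior closed-form Gaussian: a single forward pass yields the full filtering distribution, with no sampling needed at inference time.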
Sequence-Model-Guided Measurement Selection for Quantum State Learning
Huang, Jiaxin, Zhu, Yan, Chiribella, Giulio, Wu, Ya-Dong
Machine learning provides a powerful tool for characterizing quantum systems based on measurement data [1-40]. In particular, deep neural networks have played an important role across a range of tasks, including quantum state reconstruction [7-16], quantum similarity testing [17, 20, 37], prediction of quantum entanglement [21, 24, 40], and state classification [25-33]. Recent progress has enabled sequence models to predict diverse quantum properties of scalable quantum systems by modeling the measurement outcome distributions [18, 19, 22, 23, 39, 41]. An important question in quantum state learning is how to choose the appropriate measurements to gather information about an unknown quantum state. While an optimized adaptive choice can be found for small quantum systems [42-44], a full optimization quickly becomes intractable as the size of the system grows. For scalable quantum systems, a widespread approach is to employ randomized measurements [45-51]. This approach enables the estimation of a wide range of observables without performing a full tomography of the quantum state, which is not feasible for large quantum systems. When prior knowledge is available, the randomized measurement choices can be further optimized [52-54]. In general, however, determining the optimal distributions is computationally challenging for large-scale quantum systems, especially when an approximate classical description is lacking.
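The randomized-measurement approach cited above can be sketched in a few lines: pick an independent random Pauli basis per qubit per round, record the outcomes, and estimate observables by reweighting. The simulation below is a toy, assuming the fixed product state |0...0⟩ (so ⟨Z_i⟩ = 1 exactly) and using the single-qubit classical-shadow estimator; the qubit count and round count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_qubits, n_rounds = 4, 9000

def measure(basis):
    """Simulated +/-1 outcome of measuring one qubit of |0...0>
    in the given Pauli basis."""
    if basis == 'Z':
        return 1               # |0> always yields +1 under Z
    return rng.choice([1, -1]) # unbiased outcome under X or Y

# Randomized-measurement protocol: independent random Pauli basis
# per qubit per round, outcomes recorded classically.
bases = rng.choice(['X', 'Y', 'Z'], size=(n_rounds, n_qubits))
outcomes = np.array([[measure(bases[r, q]) for q in range(n_qubits)]
                     for r in range(n_rounds)])

def estimate_Z(qubit):
    """Single-qubit shadow estimator for <Z_qubit>: 3 * outcome when
    the sampled basis was Z, 0 otherwise, averaged over all rounds
    (the factor 3 inverts the 1/3 probability of drawing Z)."""
    hits = bases[:, qubit] == 'Z'
    return np.mean(np.where(hits, 3 * outcomes[:, qubit], 0))

print(round(estimate_Z(0), 2))  # close to the exact value <Z_0> = 1
```

The same recorded data can be reused to estimate many different observables after the fact, which is the key practical advantage of randomized measurements over a fixed, observable-specific measurement schedule.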
Arbitrary Polynomial Separations in Trainable Quantum Machine Learning
Recent theoretical results in quantum machine learning have demonstrated a general trade-off between the expressive power of quantum neural networks (QNNs) and their trainability; as a corollary, practical exponential separations in expressive power over classical machine learning models are believed to be infeasible, because such QNNs require training time exponential in the model size. We here circumvent these negative results by constructing a hierarchy of efficiently trainable QNNs that exhibit unconditionally provable, polynomial memory separations of arbitrary constant degree over classical neural networks in performing a classical sequence modeling task. Furthermore, each unit cell of the introduced class of QNNs is computationally efficient, implementable in constant time on a quantum device. The classical networks we prove a separation over include well-known examples such as recurrent neural networks and Transformers. We show that quantum contextuality is the source of the expressivity separation, suggesting that classical sequence learning problems with long-time correlations may be a regime where practical quantum machine learning advantages exist.
A dynamic programming algorithm for informative measurements and near-optimal path-planning
Loxley, Peter N., Cheung, Ka Wai
An informative measurement is the most efficient way to gain information about an unknown state. We give a first-principles derivation of a general-purpose dynamic programming algorithm that returns a sequence of informative measurements by sequentially maximizing the entropy of possible measurement outcomes. This algorithm can be used by an autonomous agent or robot to decide where best to measure next, planning a path corresponding to an optimal sequence of informative measurements. The algorithm is applicable to states and controls that are continuous or discrete, and to agent dynamics that are either stochastic or deterministic, including Markov decision processes. Recent results from approximate dynamic programming and reinforcement learning, including on-line approximations such as rollout and Monte Carlo tree search, allow an agent or robot to solve the measurement task in real time. The resulting near-optimal solutions include non-myopic paths and measurement sequences that can generally outperform, sometimes substantially, commonly used greedy heuristics such as maximizing the entropy of each measurement outcome. This is demonstrated for a global search problem, where on-line planning with an extended local search is found to reduce the number of measurements in the search by half.
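The greedy heuristic that the abstract uses as a baseline (maximize the entropy of each individual measurement outcome) is easy to sketch on a toy discrete search. Everything below is an illustrative assumption, not the paper's algorithm: a target hides in one of `n_cells`, a noise-free measurement reports whether the target is in the queried cell, and the agent always queries the cell whose binary outcome entropy is highest under the current belief.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 8

def outcome_entropy(p):
    """Entropy (bits) of a binary measurement outcome with hit probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def greedy_search(prior, target):
    """Query cells by the greedy entropy rule until the target is found;
    return the number of measurements used."""
    belief = prior.copy()
    for step in range(1, n_cells + 1):
        i = int(np.argmax(outcome_entropy(belief)))  # most informative cell
        if i == target:
            return step                              # target found
        belief[i] = 0.0                              # Bayes update: not here
        belief /= belief.sum()                       # renormalize
    return n_cells

prior = rng.dirichlet(np.ones(n_cells))  # arbitrary non-uniform prior
steps = [greedy_search(prior, t) for t in range(n_cells)]
print(max(steps) <= n_cells)             # True: every target is found
```

This one-step rule is myopic: it ignores movement costs and future information gain, which is precisely what the dynamic programming formulation in the abstract accounts for, and why non-myopic planning can substantially outperform it.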